Sunday, October 30th 2022

PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly taking 200% excursions (spikes). In theory, it should make your life easier by condensing multiple 8-pin PCIe power connectors into one neat little plug; in practice, the connector is proving to be quite impractical. For starters, most custom RTX 4090 graphics cards have PCBs only two-thirds the length of the card itself, which puts the power connector closer to the middle of the card and makes it aesthetically unappealing. But there is a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert on PC hardware power-delivery design.
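To put the margins in perspective, here is a back-of-the-envelope sketch in Python. It assumes an even current split across the six live 12 V pins and the commonly cited ~9.2 A per-terminal rating; both are illustrative assumptions, not figures from this article.

```python
# Per-pin load on a 12VHPWR connector (illustrative).
# Assumptions: even split across six 12 V pins; ~9.2 A per-terminal
# rating (commonly cited for this connector family; check the datasheet).

RAIL_V = 12.0
LIVE_PINS = 6
PIN_RATING_A = 9.2  # assumed rating, not from the article above

def per_pin_current(watts: float, pins: int = LIVE_PINS) -> float:
    """Current through each pin, assuming the load splits evenly."""
    return watts / RAIL_V / pins

for load_w in (450, 600, 1200):  # RTX 4090 TDP, connector max, 200% spike
    amps = per_pin_current(load_w)
    print(f"{load_w:4d} W -> {amps:5.2f} A/pin ({amps / PIN_RATING_A:.0%} of rating)")
```

At 600 W each pin already runs at roughly 90% of that assumed rating; lose solid contact on just one of the six pins and the remaining five carry 10 A each, past it. That is why improper contact is the failure mode to worry about.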

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a fairly exhaustive set of dos and don'ts, of which the don'ts are more relevant: apparently, you should not try to arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 in) before any bend, which reduces pressure on the contacts inside the connector. Combine this with the already tall RTX 4090 graphics cards, and you have a power connector that is impractical for most standard-width mid-tower cases (chassis), with no room for cable management. Attempting to "wrestle" with the connector and bend it into your desired shape will cause improper contact, which poses a fire hazard.
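A rough way to sanity-check whether a given build can honor that guidance, as a sketch with hypothetical dimensions (not measurements of any real case or card):

```python
# Side-panel clearance check against CableMod's "no bends within 3.5 cm
# of the connector" guidance. All dimensions in mm; the sample numbers
# below are hypothetical.

NO_BEND_ZONE_MM = 35.0  # required straight run from the connector
GENTLE_BEND_MM = 10.0   # assumed extra room for a sweeping bend

def cable_fits(tray_to_panel_mm: float, card_height_mm: float) -> bool:
    """True if the cable can run straight for the no-bend zone and still
    turn gently before hitting the side panel."""
    free_space = tray_to_panel_mm - card_height_mm
    return free_space >= NO_BEND_ZONE_MM + GENTLE_BEND_MM

# Hypothetical mid-tower (~165 mm usable depth) vs. a ~140 mm-tall card:
print(cable_fits(tray_to_panel_mm=165, card_height_mm=140))  # False
```

With about 165 mm of usable depth against a 140 mm-tall card, there is no room for the 35 mm straight run plus a gentle bend; hence the mid-tower problem described above.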
Update Oct 26th: There are multiple updates to the story.

The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much as PSUs some 17 years ago lacked PCIe power connectors and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on that it could not rely on adapters from AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, yet it is rated for a rather short service life of 30 connect-disconnect cycles, after which the contacts begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly and ruins the aesthetics of otherwise brilliant RTX 4090 custom designs, which creates a market for custom adapters.

Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact that made them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption accompanying the images.

Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos demonstrates the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.

Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000-series RDNA3 graphics cards will feature the 12VHPWR connector, and that the company will stick with 8-pin PCIe connectors.

Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources: Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)

230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

#76
rv8000
Solaris17: Man, comments like this really bring nothing to the table. I cannot stand it when people do it. This is totally off-topic, but I just want to throw some things out really quick before my meeting.

If you update even every 2 years, in MY experience in consumer land, you spend pretty much the same amount of money keeping up with your build as someone who blows it all at once.

I bought 2x 4090s and 2x Z690s, including all the other parts (coolers, fans, cases, RAM) for 2x platform upgrades. All at once. I probably just dropped 1/4 of what some people here on this forum make in salary.

Because I saved. Since 2017. The week after I finished our x299 builds. For the next platform jump. 5 years.

It should not make me out to be, or include me in, the demographic of people considered hardware snobs just because I can drop 3x your mortgage on PC parts in one night and still eat dinner. Your logic is flawed.

Also, I LOVE Porsches. And they don't need to be $180k cars. You can choose to spend that much, though, if you want.

For the record, if it helps: I know a few others who do it like me. At the very least it's a waste of your time (not sure you know how much that's worth yet), because what people like this think of how I spend my money doesn't affect how I sleep at night.
OT:

I agree comments like that are unnecessary.

Why, then, would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in equal fashion; especially odd when "how someone spends their money" doesn't affect how you sleep at night.
#77
Solaris17
Super Dainty Moderator
rv8000: OT:

I agree comments like that are unnecessary.

Why, then, would you make a post justifying how you spend your money? You're making a post about the same topic, just arguing the other side, adding nothing to the topic in equal fashion; especially odd when "how someone spends their money" doesn't affect how you sleep at night.
You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.
#78
randomUser
What they've basically told us here is: "we are making connectors as cheap as possible, so quality is very low and the plastic bends and tears easily; please don't touch it."
#79
Solaris17
Super Dainty Moderator
randomUser: What they've basically told us here is: "we are making connectors as cheap as possible, so quality is very low and the plastic bends and tears easily; please don't touch it."
You know the connector and the decision were not NVIDIA's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (+35 mm). I mean, someone SOMEWHERE had to have put this in a case and been like, "Man, we should, idk, put this at the back of the card, right??"
#80
Vayra86
the54thvoid: I feel as though I'm banging my head into a brick wall.

It doesn't matter whether Molex is a PITA. It matters that people are using this to bash NVIDIA as though it's their fault. AMD uses the same mini-Molex 6- and 8-pin connectors (from the PSU), which all follow certain standards, namely the 30-cycle mating spec. The 30-cycle thing is not the issue.

The issue is the shitty bend mechanics and pin contact failure.
The 30 cycles are another symptom of a business that is constantly eating away at what could be called 'headroom' at large.

Anyone with half a bit of sense would raise this number, because, simply enough, GPUs definitely do get close to that limit or go over it. And these GPUs aren't dealing in low wattages like, say, HDDs or a bunch of fans do; here we are dealing with loads fat enough to make a PSU sweat. Also, and much more importantly, these cables aren't getting cheaper, and certainly not dirt cheap like Molex. We're talking about top-of-the-line components here.

Similar things occur with, for example, flat SATA cables. They're too weak, so they break during the normal lifespan if you do more than one or two reinstalls with them; and let's face it, with SSDs that likelihood has only increased, as the devices are much more easily swapped around or taken portable, boards now offer hot-swap sockets for SATA, etc.

And examples like it are rapidly gaining new friends: Intel's bending IHS, thermal pads needed on GPUs to avoid failure, etc.

The issue is shitty bend mechanics, yes, but at the core of all these issues is one simple thing: cost reduction at the expense of OUR safety and the durability of our devices. Be wary of what you cheer for; saying 30 cycles is fine because it's the same as Molex fails to appreciate how PC gaming has evolved. Specs should evolve along with it.
#81
ThrashZone
Hi,
Should've made the plastic tab longer if that was the minimum bend point.
Hell, just extend it down so it acts like a leg to hold the big bitch up :laugh:
#82
Vayra86
Solaris17: You should never pass up the opportunity to enlighten. Within reason of course. There is always the chance it expands his thinking.
That also goes both ways. I have to agree that it is pretty odd seeing the comments of people here with regard to products, and a lot of that is happening in the 'top end' segment; but then, that's my view. It all depends on your perspective: some want the latest and greatest no matter what, and there is no common sense involved. The fact that you are different does not make it a rule; and yes, I think the lack of sense in some minds is also an opportunity to enlighten.

We all have our lenses to view the world through, (un?)fortunately. Nobody's right. Or wrong. But the social feedback is generally how norm and normality are formed.

Still, though, I don't think it's entirely honest to your budgeting or yourself to say buying the top end is price-conscious. It really isn't: all costs increase because you've set the bar that high, $/fps is worst at the top, and that problem only gets bigger if you compare gen-to-gen for similar performance. It's perfectly possible to last a similar number of years with something one or two notches lower in the stack and barely notice the difference, especially today, where the added cost of cooling and other requirements can amount to many hundreds of extra dollars.

That said, buying 'high end' is definitely more price-conscious than cheaping out and then getting forced into an upgrade because you really can't run stuff properly in two or three years' time. But there is nuance here; an x90 was never a good idea, except when they go on sale like AMD's 69xx cards do now. It's the same with buying at launch; tech depreciates too fast to make it worthwhile. You mentioned cars yourself; a similar loss of value applies, the moment you drive off...
Solaris17: You know the connector and the decision were not NVIDIA's, but I do have to wonder what their engineering team was thinking. Certainly they were given the same kind of information (+35 mm). I mean, someone SOMEWHERE had to have put this in a case and been like, "Man, we should, idk, put this at the back of the card, right??"
Yeah, or you could decide to design and offer your very own NVIDIA-branded cable doing the same thing but with somewhat greater tolerances. One could say their margins and their leadership position kind of make that an expectation, even. NVIDIA is always first foot in the door when it comes to pushing tech ahead... They still sell G-Sync modules even though the peasant spec is commonplace now, for example.

Everything just stinks of cutting corners, and in this segment, IMHO, that's instant disqualification.

Also, back of the card? Where exactly? There's no PCB on the better half of it, right?
#83
ThrashZone
Vayra86: Oh? We have numerous powerful ITX builds with high-end components going about. Smaller cases can dissipate heat fine...

And that's the core of the issue here: a trend in PC components where higher power draw changes the old rules about what is possible and what is not. There is no guidance on that, from NVIDIA either; they just assume you will solve the new DIY build problems that might arise from the specs they devised.

The very same thing is happening with CPUs. And for what? To run the hardware way outside its efficiency curve, skirting the limits of what is possible out of the box, to justify a ridiculous price point for a supposed performance edge you might never reach.

Components have landed in nonsense territory on the top end to keep the insatiable hunger of commerce afloat.
Hi,
Yep, seen a few.
Thing is, they are usually vertical-mount GPUs.

I was referring to standard mounting.
#84
john_
Star_Hunter: If NVIDIA had just set this card's TDP to 350 W instead of 450 W, it would have had 97% of the 450 W level of performance. That would have enabled a smaller cooler, and therefore more room for the power connection, avoiding all this mess. Not sure if they did this out of concern about RDNA3, but I feel they really should have picked a better spot on the card's power-efficiency curve. If someone wants more performance, simply have them use water cooling and overclock.
Well, it makes sense, but today it couldn't happen.
First, a 3% difference is enough to decide who is in first place in the charts and who is second, and people on the Internet, at least, are happy to declare the top card a monster and the 3%-slower card a failure, even if that second card is, for example, 30% more efficient. So everyone pushes their factory overclocks as high as they can: Intel first, because its 10 nm problems forced it to compete while still on 14 nm; NVIDIA later, probably having decided to pocket all the money and leave nothing to AIBs; and now also AMD, which has realized that targeting efficiency is suicidal.
There wasn't a chance in a million that NVIDIA would come out with a Founders Edition at 350 W and let its partners produce custom models that go to 450 W or higher.
#85
zlobby
Mwa-ha-ha-ha-ha! :roll:

#87
GreiverBlade
Oh, I am fine, then...

Clearance OK; PSU? Errrr... nah, no chance in hell :p I changed it recently enough not to care about ATX 3.0 PSUs, which are all but unavailable anywhere atm (aside from one model from Thermaltake, which also costs close to 3.5x the price I paid for my current one). It seems AMD will keep 8-pins for the higher-end models (6-pins for lower ones), and I hope they stick to that, given 1. the price, and 2. the issues seen recently.

Although... I HAVE the clearance! That's more important. Oh, and I've known since my first self-build that a tight cable bend is a bad, bad thing... and not only for PCs (especially with low-quality cables/extensions; some cables handle steep curves better than others...).

I learned that on a 6+8-pin Zotac GTX 770 some years ago.
#88
TheinsanegamerN
Dirt Chip: AMD will need to fallow this power standard design if they want to stay competitive in the high-end.
AMD won't need to "fallow" this, considering the vast majority of PSUs still use 8-pin connectors. They can just use 8-pins and offer a 12VHPWR-to-8-pin adapter for all the suckers who ran out and bought a new PSU.
#89
GreiverBlade
TheinsanegamerN: AMD won't need to "fallow" this, considering the vast majority of PSUs still use 8-pin connectors. They can just use 8-pins and offer a 12VHPWR-to-8-pin adapter for all the suckers who ran out and bought a new PSU.
Well, they... "Fall O[ut of that] W[hacko]" new connector, I guess...
#90
TechLurker
I find it humorous that, despite the 12VHPWR being a joint NVIDIA/Intel design, even Intel didn't use it for their higher-end Arc cards, even though "lower-power" versions of the 12-pin connector exist.

That said, why not just shift to EPS12V? Higher-end 1 kW+ PSUs can power two or more EPS12V lines by default (depending on the modular options), and EPS12V 4-pin can handle 155 W continuous while the 8-pin can handle 235-250 W continuous. That would still require 3x 8-pin connectors for 600-700 W cards, but at least the output per 8-pin goes up from the PCIe limit of 150 W, and having some PSU makers swap PCIe connectors for extra EPS12V isn't much different from designing ATX 3.0 PSUs with a dedicated but potentially faulty 12VHPWR connection. If anything, Seasonic's higher-end PSUs can do either EPS12V or PCIe from the same modular port, so they could be adapted quickly. And most importantly, all EPS12V wires and contacts are of a thicker gauge than the 12VHPWR wires and contacts.
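To line the comment's figures up, a minimal sketch using the wattages quoted above as rules of thumb (actual capability depends on terminals and wire gauge):

```python
import math

# Nominal continuous budgets from the comment above (rules of thumb).
CONNECTOR_BUDGET_W = {
    "PCIe 8-pin":     150,  # PCIe CEM limit
    "EPS12V 4-pin":   155,
    "EPS12V 8-pin":   235,
    "12VHPWR 16-pin": 600,
}

def connectors_needed(target_w: float, kind: str) -> int:
    """How many connectors of one kind cover a target board power."""
    return math.ceil(target_w / CONNECTOR_BUDGET_W[kind])

for kind, budget in CONNECTOR_BUDGET_W.items():
    print(f"{kind:>15} ({budget:3d} W): {connectors_needed(600, kind)} for a 600 W card")
```

By these budgets a 600 W card needs four PCIe 8-pins but only three EPS12V 8-pins, which is the trade-off the comment is pointing at.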
#91
erocker
Dirt Chip: AMD will need to fallow this power standard design if they want to stay competitive in the high-end.
Not in the slightest.
#92
Dave65
phanbuey: What a horrible design... hoping AMD is competitive this time.
This time?
Pretty sure AMD is/has been competitive.
#93
efikkan
Dirt Chip: According to whom? Because you have any number of examples that work without a problem, plus that tiny thing called a long validation process by electrical engineers.
You know, it can be smaller and better. CPUs do that all the time.
So you think that cables can just get smaller and smaller while drawing more and more current?
It's been quite a few years since I went to school, but I didn't catch the news that the laws of physics had been broken. Unless someone uses materials with better conductivity, it's very predictable how much heat will be generated by a cable of a given gauge and power draw.
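As a concrete illustration of that predictability, a small sketch using standard copper resistance values; the 16 AWG wire and the 50 mΩ degraded-contact figure are assumptions for illustration, not measurements:

```python
# P = I^2 * R: resistive heating from gauge, length and current.
# Copper ohms-per-metre are standard values; 16 AWG is assumed here
# (common for 12VHPWR leads), and the 50 mohm bad contact is hypothetical.

AWG_OHM_PER_M = {18: 0.02095, 16: 0.01317, 14: 0.00829}  # copper, ~20 degC

def conductor_heat_w(current_a: float, awg: int, length_m: float) -> float:
    """Heat dissipated along one conductor of the given gauge and length."""
    return current_a ** 2 * AWG_OHM_PER_M[awg] * length_m

amps = 600 / 12 / 6  # ~8.33 A per 12 V wire at a 600 W even split
print(f"{conductor_heat_w(amps, 16, 0.3):.2f} W along a 0.3 m wire")  # ~0.27 W
print(f"{amps**2 * 0.050:.2f} W in a 50 mohm degraded contact")       # ~3.5 W
```

A quarter of a watt spread along a whole wire is nothing; several watts concentrated in one contact inside a plastic housing is how pins melt while the cable itself stays fine.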
Aquinus: Maybe instead of making new connectors and treating the symptom, we should invest time into having hardware detect these high-resistance situations, so a user can take action before stuff starts melting or catching fire. Ultimately this is a state that needs immediate action, and even with the best of connectors something can still go wrong. Regardless of connector, I'd like to be aware of this situation, should it arise, before it causes damage.
That sounds very much like treating a symptom rather than solving the underlying cause. Even if the cause were a small increase in resistance, how would you precisely detect it from either the PSU or the GPU end (keeping in mind both the tolerances in the spec of each part and the fact that the current draw changes very rapidly)? You can't just do it the way a multimeter would, by sending a small current and measuring the voltage drop to calculate resistance. Are there more advanced (and reliable) techniques, beyond my knowledge of electronics, that would make your proposal feasible?

I'm more a fan of doing good engineering to create a robust design rather than overengineering a complex solution to compensate for a poor design.

The one thing that doesn't sound quite right to me is the claim that this whole problem is cables that are not fully seated causing extreme heat that melts the plug; I would like to see a proper in-depth analysis rather than jumping to conclusions. From what I've seen of power plugs (of any type/size) over the years, heat issues at the contacts are unusual, unless we are talking about making no connection at all and causing arcing, but that's with higher voltages. With 12 V and this wire gauge, the threshold between "good enough" and no connection will be very tiny, probably less than 1 mm. So if this were the core problem, then engineering a solution would be fairly easy, by either making the cables stick better in the plug or making the contact area a tiny bit larger. Keep in mind that with most types of plugs the connection is usually far better than the wire, so unless it's physically damaged there shouldn't be an issue with connectivity. (Also remember that electrons move on the outside of the wire, and the contact surface area in most plugs is significantly larger than the wire gauge, probably >10x.)
So the explanation so far sounds a little bit off to me.
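For what it's worth, one conceivable answer to the detection question above, as an illustrative sketch only (no shipping GPU or PSU is known to implement this): instead of injecting a test current, use the rapidly varying load itself, sampling rail voltage at the card against current draw and fitting the slope, which is the series resistance of the whole delivery path.

```python
# Illustrative only: estimate path resistance from load-varying telemetry
# by fitting V = V_src - I * R_series with least squares. The telemetry
# samples below are synthetic; real tolerances would add noise.

def estimate_series_resistance(samples: list[tuple[float, float]]) -> float:
    """Least-squares slope of voltage vs. current; returns R_series (ohms).
    samples: (current_A, voltage_at_card_V) pairs."""
    n = len(samples)
    mean_i = sum(i for i, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((i - mean_i) * (v - mean_v) for i, v in samples)
    var = sum((i - mean_i) ** 2 for i, _ in samples)
    return -cov / var  # the fitted slope is -R_series

# Synthetic telemetry: a healthy ~10 mohm path vs. a degraded ~60 mohm one.
healthy = [(i, 12.0 - i * 0.010) for i in (5.0, 15.0, 30.0, 45.0)]
degraded = [(i, 12.0 - i * 0.060) for i in (5.0, 15.0, 30.0, 45.0)]
print(f"{estimate_series_resistance(healthy):.3f} ohm")   # ~0.010
print(f"{estimate_series_resistance(degraded):.3f} ohm")  # ~0.060
```

Spec tolerances and PSU regulation would limit absolute accuracy, but an upward trend in the estimate could still flag a degrading connection before anything melts; whether that beats simply building a more robust connector is exactly the trade-off debated above.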
MachineLearning: This article's tone is pretty condescending. It doesn't take "arm-wrestling" to make the connector burn up; it's just poorly designed.
Even I, with my thick fingers, would probably manage to do this unintentionally.
I would call this poor engineering, not user error.
#94
TheoneandonlyMrK
efikkan: So you think that cables can just get smaller and smaller while drawing more and more current? [...] I would call this poor engineering, not user error.
I agree, but not because the parts are in any way bad, IMHO; though with my little experience, that counts for even less than its usual nothing.

But I think it needed better positioning, or an adequate adapter should have been provided free as an essential add-in.
#95
Wirko
How long before the industry discovers that 12 volts is far too low a voltage for GPUs and CPUs alike?
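The arithmetic behind that question, as a small illustrative sketch: for fixed power, current scales as 1/V and conduction loss in the same cable as 1/V².

```python
# For the same power, current falls as 1/V and I^2*R cable loss as 1/V^2.
# Illustrative numbers for a 600 W load; cable resistance held constant.

POWER_W = 600
BASE_AMPS = POWER_W / 12  # current on today's 12 V rail

for rail_v in (12, 24, 48):
    amps = POWER_W / rail_v
    rel_loss = (amps / BASE_AMPS) ** 2
    print(f"{rail_v:2d} V: {amps:5.1f} A, {rel_loss:4.0%} of the 12 V cable loss")
```

A 48 V rail would move the same 600 W at 12.5 A; the catch is that the card's VRMs and the whole PSU ecosystem would have to change with it.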
#96
Sisyphus
Electrical plug connections should not be exposed to any mechanical stress. The consequences: poor contact resistance, broken cables, cracks in the solder joints. Whoever does not know this should not build PCs.
#97
Minus Infinity
Well done, Huang; I hope this costs a bomb to re-engineer. The RDNA3 7900 XT is looking better every day. I don't really care about the numbers; I know it will destroy RDNA2 and Ampere and my 2080 Super and 1080 Ti, so that is all that matters to me. I don't need it to outdo the 4090 at all. With RT improved by over 100%, and with FSR, it'll be fine for anything I can throw at it.
#98
JAB Creations
The comment count is accompanied by a fire emoji; how appropriate.
#99
QUANTUMPHYSICS
So who wants to fix this by building a hardened, angled adapter?
#100
LabRat 891
Pretty standard advice. But you wouldn't know that unless you've wrangled a lot of SATA, coaxial, fibre, or other bend-radius-sensitive fine-pitch cabling.

All the little oversights in moving to this new connector exemplify and confirm my 'feelings' on modern engineering at large: practical considerations are ranked well below 'the book'.
Who cares, right? Not like they're liable for damages, that's on the manufacturer and end user...