Sunday, October 30th 2022

PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

Despite sticking with PCI-Express Gen 4 as its host interface, the NVIDIA GeForce RTX 4090 "Ada" graphics card standardizes the new 12+4 pin ATX 12VHPWR power connector, even across custom designs by NVIDIA's add-in card (AIC) partners. This tiny connector is capable of delivering 600 W of power continuously, and of briefly taking 200% excursions (spikes). Normally, it should make your life easier, as it condenses multiple 8-pin PCIe power connectors into one neat little connector; but in reality the connector is proving to be quite impractical. For starters, the PCBs of most custom RTX 4090 graphics cards span only two-thirds of the actual card length, which puts the power connector closer to the middle of the card, making it aesthetically unappealing. But then there's a bigger problem, as uncovered by Buildzoid of Actually Hardcore Overclocking, an expert in PC hardware power-delivery designs.
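To put those numbers in perspective, here is a quick back-of-the-envelope sketch (our own illustration; it assumes an even current split across the six 12 V pins, which a worn or badly-seated connector won't deliver):

# Rough current math for a 600 W, 12 V connector (illustrative assumptions:
# even sharing across the six live pins; spikes at 200% of continuous power).
VOLTAGE = 12.0          # volts on the supply rail
CONTINUOUS_W = 600.0    # rated continuous power
LIVE_PINS = 6           # six +12 V pins carry current; six grounds return it

continuous_a = CONTINUOUS_W / VOLTAGE   # 50 A total
spike_a = continuous_a * 2              # 100 A during a 200% excursion

print(f"Continuous: {continuous_a:.0f} A total, "
      f"{continuous_a / LIVE_PINS:.1f} A per 12 V pin")
print(f"Spike:      {spike_a:.0f} A total, "
      f"{spike_a / LIVE_PINS:.1f} A per 12 V pin")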

CableMod, a company that specializes in custom modular-PSU cables targeting the case-modding community and PC enthusiasts, has designed a custom 12VHPWR cable that plugs into multiple 12 V output points on a modular PSU, converting them to a 16-pin 12VHPWR. It comes with a fairly exhaustive set of dos and don'ts; the latter are more relevant: apparently, you should not arm-wrestle with a 12VHPWR connector. Do not attempt to bend the cable horizontally or vertically close to the connector; leave a distance of at least 3.5 cm (1.38 in). This reduces the pressure on the contacts in the connector. Combine this with the already tall RTX 4090 graphics cards, and you have yourself a power connector that's impractical for most standard-width mid-tower cases (chassis), with no room for cable-management. Attempting to "wrestle" with the connector and somehow bend it into your desired shape will cause improper contact, which poses a fire hazard.
Update Oct 26th: There are multiple updates to this story below.

The 12VHPWR connector is a new standard, which means most PSUs on the market lack it, much in the same way as PSUs some 17 years ago lacked PCIe power connectors, and graphics cards included 4-pin Molex-to-PCIe adapters. NVIDIA probably figured out early on when implementing this connector that it cannot rely on adapters from AICs or PSU vendors to perform reliably (i.e., not cause problems with its graphics cards, resulting in a flood of RMAs), and so took it upon itself to design an adapter that converts 8-pin PCIe connectors to a 12VHPWR, which all AICs are required to include with their custom-design RTX 4090 cards. This adapter is rightfully overengineered by NVIDIA to be as reliable as possible, yet NVIDIA specifies a rather short service life of 30 connect-disconnect cycles, after which the contacts of the adapter begin to wear out and become unreliable. The only problem with NVIDIA's adapter is that it is ugly, and ruins the aesthetics of the otherwise brilliant RTX 4090 custom designs; which means a market is created for custom adapters.

Update 15:59 UTC: A user on Reddit who goes by "reggie_gakil" posted pictures of a GeForce RTX 4090 graphics card with a burnt-out 12VHPWR connector. While the card itself is "fine" (functional), the NVIDIA-designed adapter that converts 4x 8-pin PCIe to 12VHPWR has a few melted pins, probably caused by improper contact making them overheat or short. "I don't know how it happened but it smelled badly and I saw smoke. Definetly the Adapter who had Problems as card still seems to work," goes the caption with these images.

Update Oct 26th: Aris Mpitziopoulos, our associate PSU reviewer and editor of Hardware Busters, did an in-depth video presentation on the issue, in which he argues that the 12VHPWR design may not be at fault, but rather extreme abuse by end-users attempting to cable-manage their builds. Mpitziopoulos demonstrates the durability of the connector in its normal straight form versus when tightly bent. You can catch the presentation on YouTube here.

Update Oct 26th: In related news, AMD confirmed that none of its upcoming Radeon RX 7000 series RDNA3 graphics cards features the 12VHPWR connector, and that the company will stick to 8-pin PCIe connectors.

Update Oct 30th: Jon Gerow, aka Jonny Guru, has posted a write-up about the 12VHPWR connector on his website. It's an interesting read with great technical info.
Sources: Buildzoid (Twitter), reggie_gakil (Reddit), Hardware Busters (YouTube)

230 Comments on PSA: Don't Just Arm-wrestle with 16-pin 12VHPWR for Cable-Management, It Will Burn Up

#126
rv8000
Does this make anyone else wish that all PCIe GPU power connectors were right-angle connectors in the first place?

Cable training/management has always put some kind of force on GPUs in opposition to the PCIe slot in my experience, especially in smaller mid-tower cases and SFF builds.

I'm guessing there's a good reason they haven't. I wonder if even changing the orientation of the PCIe plug would have been a better solution in the long run (vertically off the PCB in either the up or down direction, like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.
#127
Punkenjoy
rv8000Does this make anyone else wish that all PCIe GPU power connectors were right-angle connectors in the first place?

Cable training/management has always put some kind of force on GPUs in opposition to the PCIe slot in my experience, especially in smaller mid-tower cases and SFF builds.

I'm guessing there's a good reason they haven't. I wonder if even changing the orientation of the PCIe plug would have been a better solution in the long run (vertically off the PCB in either the up or down direction, like a motherboard power connector). Seems like an odd limitation to stand by when I think about it.
Yes! This. But the focus was always on getting the best cooler, and that position is the one that least limits airflow and radiator size for cooling. Now that they have massively oversized coolers, they could do something better; but if you have a 4-slot GPU, it's quite a deep reach to plug in your card if the connector sat flat on the board instead of at a 90° angle (unless they made it extra long).

As for having it face the top, that's another story: you would need a mezzanine card to keep it from sticking out of the back of the card. Not impossible, but way more complex, and again they would need to secure it in place to avoid frying your board.

But god, that would look way better than this.

The best alternative would probably be at the end of the card, but again, due to cooling, the PCB no longer extends to the end of the card, to let air pass through there.

At this point, why not redo the whole PCIe connector to deliver 600+ W and just have a beefier connector on the motherboard?

No simple solution right now.
#128
OneMoar
There is Always Moar
PunkenjoyYes! This. But the focus was always on getting the best cooler, and that position is the one that least limits airflow and radiator size for cooling. Now that they have massively oversized coolers, they could do something better; but if you have a 4-slot GPU, it's quite a deep reach to plug in your card if the connector sat flat on the board instead of at a 90° angle (unless they made it extra long).

As for having it face the top, that's another story: you would need a mezzanine card to keep it from sticking out of the back of the card. Not impossible, but way more complex, and again they would need to secure it in place to avoid frying your board.

But god, that would look way better than this.

The best alternative would probably be at the end of the card, but again, due to cooling, the PCB no longer extends to the end of the card, to let air pass through there.

At this point, why not redo the whole PCIe connector to deliver 600+ W and just have a beefier connector on the motherboard?

No simple solution right now.
Because PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signaling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
#129
jonnyGURU
NihillimWasn't this a collaborative effort between Intel and PCI-SIG?
No. Not Intel. They didn't add the connector to ATX 3.0 until after it was finalized by the consortium.
#130
Jism
the54thvoidThis is the same industry-standard cycle as for many Molex connectors, i.e., not an issue for the normal end-user.

Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.
I actually noticed that one of my PCIe cables (8-pin) had a good amount of corrosion. The way I detected it was HWiNFO reporting as low as 11 V on the 12 V VRM input rail. That couldn't be good. Once I swapped out the cable it was a clean 12 V again.

So yeah, it's real. They wear out with the number of times they're installed and removed, mostly at the cost of whatever coating is on the metal pins, I guess.
#131
ThrashZone
the54thvoidThis is the same industry-standard cycle as for many Molex connectors, i.e., not an issue for the normal end-user.

Scare-mongering (or lack of due-diligence) isn't helpful when trying to remain a reliable tech site.
Hi,
Every PSU I've bought to date included 6 VGA cables.
When will NVIDIA include 6 adapters?
Or will PSU makers send the same 6 cables for new GPUs?
#132
TheoneandonlyMrK
OneMoarBecause PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signaling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
Didn't Apple manage it with Vega?
All on the PCB.

I.e., PCIe-ish, but with an additional power connector.
#134
Jism
TheoneandonlyMrKDidn't Apple manage it with Vega?
All on the PCB.

I.e., PCIe-ish, but with an additional power connector.
Pretty much all the hardware Apple releases is proprietary, so they can design their own standards. They don't have to stick to ATX or PCIe specs.
#135
TheoneandonlyMrK
JismPretty much all the hardware Apple releases is proprietary, so they can design their own standards. They don't have to stick to ATX or PCIe specs.
Does that answer my question with a yes?

I had heard of Apple and their walled garden, so nothing you just said did I not know.

So if it's been done, it could be done again; or did Apple somehow patent inline PCB power connections?
#136
Redwoodz
Very simple: Nvidia overstepped what is acceptable power draw in an ATX PC. This is a design failure.
#137
Jism
TheoneandonlyMrKDoes that answer my question with a yes?

I had heard of Apple and their walled garden, so nothing you just said did I not know.

So if it's been done, it could be done again; or did Apple somehow patent inline PCB power connections?
Apple is pretty much computers on the go. You buy one, and you don't have to look for ways to upgrade it or do the "build it yourself" type of thing. It's very easy, pretty much ready from the go. However, Apple is so different in terms of software that most Windows users couldn't manage on an Apple in the first place.

And inline PCB power isn't something new. Look up the OAM form factor: it's capable of more than 500 W of power delivery per "card". There are servers out there with 8 of those things stacked into them; servers that require almost 4 kW of power in full operation.
#138
TheoneandonlyMrK
JismApple is pretty much computers on the go. You buy one, and you don't have to look for ways to upgrade it or do the "build it yourself" type of thing. It's very easy, pretty much ready from the go. However, Apple is so different in terms of software that most Windows users couldn't manage on an Apple in the first place.

And inline PCB power isn't something new. Look up the OAM form factor: it's capable of more than 500 W of power delivery per "card". There are servers out there with 8 of those things stacked into them; servers that require almost 4 kW of power in full operation.
Next you'll tell me PCBs have been invented for building circuits and that Apple isn't a fruit.

Wtaf, do you really think I asked without knowing this stuff?!

I just wasn't sure if Apple used something like it on the dual-Vega card they had.

And despite two replies I'm still not 100% sure.
#139
Jism
PCBs have been invented for building circuits.
#140
Punkenjoy
OneMoarBecause PCBs are complicated and cramped enough without figuring out how to run traces big enough to reliably carry 50 A the required distance without causing signaling problems.
Also, redesigning the PCIe spec because of a single GPU from a single vendor? Are you outta your mind?
Well, if you use one trace you have to carry the full 50 A at 12 V; if you use 12 traces, each only has to carry a little over 4 A.

As said above, carrying this amount of power on a very complex circuit board is already done on the server side. This is not impossible; very far from it.

The biggest challenge is how you handle the transition to the new standard: cards with 8-pin/16-pin plus board power until enough motherboards have the new GPU slot. Not impossible, but it needs multiple corporations to agree on a timeline.
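A quick sketch of the current-division arithmetic in this exchange (our own illustration; it models only current splitting, and deliberately ignores the trace-width and signal-integrity issues OneMoar raises):

# Splitting a 600 W, 12 V feed across parallel traces divides the current;
# it says nothing about routing, trace width, or signaling.
total_a = 600.0 / 12.0   # 50 A

for traces in (1, 6, 12):
    print(f"{traces:>2} traces -> {total_a / traces:4.1f} A per trace")
# 1 -> 50.0 A, 6 -> 8.3 A, 12 -> 4.2 A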
#141
Dirt Chip
medi01No. Per their own words (links have been shared several times), it was SPECIFICALLY NVIDIA that designed the power-delivery aspect of that socket. Intel only did the sensing part.
Well, if it's only NV's work then OK, they are responsible from top to bottom. I really thought it was part of the ATX 3.0 spec.

Anyway, it would still be interesting to know the details (if there are any) of how it was connected: was it fully clicked in? Were the wires bent, and if so, was any force applied to the connector that made it tilt as a result?

Unless we see multiple incidents of non-bent yet melted connectors, it can be classified as simple 'human error'.

Also, I can totally see this being a deal-breaker to some and, more than that, the holy grail of bashing ammo; the Samsung exploding-battery kind of thing.

All of this, and the 4090 Ti with its 525 W stock power is yet to come. Yummy!
#142
Godrilla
Not going to risk it; I'll allow the cable to be straight with the case open. I had the cables tucked away with a slight curved bend in my H210 ITX case. Can someone do a thermal test directly on the cables, with and without a bend, at max load (obviously for a short period of time)?
#143
TheoneandonlyMrK
GodrillaNot going to risk it; I'll allow the cable to be straight with the case open. I had the cables tucked away with a slight curved bend in my H210 ITX case. Can someone do a thermal test directly on the cables, with and without a bend, at max load (obviously for a short period of time)?
Have a word with yourself.

You imply you have one (4090).

Why would anyone, though, other than a journalist, do this to their £1600$$/260000$ :p purchase?

Just to see.
#144
Keullo-e
S.T.A.R.S.
Nice. The cards are already huge bricks, and yet they still need even more space at their sides? After all, it wasn't that bad when the power connector(s) were at the back of the card, like in the old days...
#145
Godrilla
TheoneandonlyMrKHave a word with yourself.

You imply you have one (4090).

Why would anyone, though, other than a journalist, do this to their £1600$$/260000$ :p purchase?

Just to see.
An ITX case has its challenges, and I game with headphones on, so it doesn't bother me. I school the journalists. :rockout::cool:
#146
Arkz
jonupMy math doesn't agree with your conclusion. Assuming you are correct that each pin is good for 8 amps, 12 pins at 12 volts are good for 1152 W. At 600 W they should be handling a little over 4 amps each if the load is evenly spread, which it probably isn't. We still have plenty of headroom though.
You're counting all of that power as going in through 12 pins. There are only six 12 V pins sharing the 50 A; the other six are ground pins returning it.
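The disagreement comes down to how many pins actually source current; a quick sketch of both accountings (assuming the 8 A-per-pin rating jonup cites and a perfectly even split):

# jonup divides 50 A across all 12 pins; Arkz points out that only the six
# +12 V pins source current (the other six are ground returns).
total_a = 600.0 / 12.0   # 50 A at 600 W on a 12 V rail
PIN_RATING_A = 8.0       # per-pin rating assumed in jonup's post

per_pin_12 = total_a / 12   # ~4.2 A/pin (counting all 12 pins)
per_pin_6 = total_a / 6     # ~8.3 A/pin (counting only the live pins)
print(f"12-pin accounting: {per_pin_12:.1f} A/pin")
print(f" 6-pin accounting: {per_pin_6:.1f} A/pin vs a {PIN_RATING_A:.0f} A rating")
# ~8.3 A per live pin leaves essentially no headroom against an 8 A rating.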
#147
Crackong
www.igorslab.de/en/adapter-of-the-gray-analyzed-nvidias-brand-hot-12vhpwr-adapter-with-built-in-breakpoint/

Igor just confirmed that the NVIDIA-built adapter is of such high quality that the wires are held (only) by solder onto a very, very thin piece of metal, and can break off with very little force.
The card doesn't even know one of the solder joints is broken, because all the pins are joined inside the adapter; the pin never reads as "disconnected", the load just spreads to the other wires and pumps up the amps (and wire temps), and it keeps going until something melts.

What a wonderful design!
#148
jonnyGURU
Yeah. This is bad. Really bad.

Welp! Connectors from Corsair, beQuiet, CableMod, etc. don't use this method; they use a standard crimp, so... buy them up, guys! :D
#149
TheDeeGee
Glad this is sorted out then.

Now I'd like to know how CableMod does their cables.
#150
OneMoar
There is Always Moar
This is so incredibly stupid: a $1,500 card, and they are using bonded tin on the connector.

That being said, if I were the AIBs, I would be installing a thermal probe on the connector to ensure the thing throttles or shuts down if it starts getting hot. Set it at 50-60 °C; if the connector ever gets that hot, there is a problem.
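A minimal sketch of that protection logic (hypothetical: the sensor hook and the 50-60 °C thresholds come from OneMoar's suggestion, not from any shipping firmware):

# Hypothetical connector-temperature watchdog, per OneMoar's suggestion.
THROTTLE_C = 50.0   # start capping board power here (assumed threshold)
SHUTDOWN_C = 60.0   # hard shutdown here (assumed threshold)

def police_connector(read_temp_c, throttle, shutdown):
    """Poll a connector thermistor and react; callbacks are placeholders."""
    temp = read_temp_c()
    if temp >= SHUTDOWN_C:
        shutdown()      # connector is overheating: stop before it melts
    elif temp >= THROTTLE_C:
        throttle()      # reduce the power limit until the connector cools

# Example with stand-in callbacks:
police_connector(lambda: 57.0,
                 lambda: print("throttling"),
                 lambda: print("shutting down"))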